Healthcare AI
A Model-Driven Engineering Approach to AI-Powered Healthcare Platforms
Raheem, Mira, Elgammal, Amal, Papazoglou, Michael, Krämer, Bernd, El-Tazi, Neamat
Artificial intelligence (AI) has the potential to transform healthcare by supporting more accurate diagnoses and personalized treatments. However, its adoption in practice remains constrained by fragmented data sources, strict privacy rules, and the technical complexity of building reliable clinical systems. To address these challenges, we introduce a model-driven engineering (MDE) framework designed specifically for healthcare AI. The framework relies on formal metamodels, domain-specific languages (DSLs), and automated transformations to move from high-level specifications to running software. At its core is the Medical Interoperability Language (MILA), a graphical DSL that enables clinicians and data scientists to define queries and machine learning pipelines using shared ontologies. When combined with a federated learning architecture, MILA allows institutions to collaborate without exchanging raw patient data, ensuring semantic consistency across sites while preserving privacy. We evaluate this approach in a multi-center cancer immunotherapy study. The generated pipelines delivered strong predictive performance, with support vector machines achieving up to 98.5 percent and 98.3 percent accuracy in key tasks, while substantially reducing manual coding effort. These findings suggest that MDE principles (metamodeling, semantic integration, and automated code generation) can provide a practical path toward interoperable, reproducible, and trustworthy digital health platforms.
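The federated arrangement described in the abstract, where sites collaborate without exchanging raw patient data, can be illustrated with a minimal sketch. All function names, the toy gradient step, and the plain-averaging scheme below are illustrative assumptions (a FedAvg-style sketch), not MILA's or the paper's actual implementation:

```python
# Minimal federated-averaging sketch (hypothetical; stdlib only).
# Each site shares only its locally trained weight vector -- never raw
# patient records -- and a coordinator averages the weights.

def local_update(weights, site_gradient, lr=0.1):
    """One local training step at a site (toy gradient descent)."""
    return [w - lr * g for w, g in zip(weights, site_gradient)]

def federated_average(site_weights):
    """Coordinator: element-wise average of the sites' weight vectors."""
    n_sites = len(site_weights)
    return [sum(ws) / n_sites for ws in zip(*site_weights)]

# Three hypothetical hospitals start from the same global model.
global_model = [0.0, 0.0]
site_gradients = [[1.0, -2.0], [3.0, 0.0], [-1.0, 4.0]]

local_models = [local_update(global_model, g) for g in site_gradients]
global_model = federated_average(local_models)
print(global_model)  # approximately [-0.1, -0.0667]
```

In a real deployment the averaging step would run behind a secure aggregation protocol; the point of the sketch is only that model parameters, not patient records, cross institutional boundaries.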
Agentic-AI Healthcare: Multilingual, Privacy-First Framework with MCP Agents
Abstract--This paper introduces Agentic-AI Healthcare, a privacy-aware, multilingual, and explainable research prototype developed as a single-investigator project. The platform integrates agentic orchestration via the Model Context Protocol (MCP) with a dedicated Privacy & Compliance Layer that applies role-based access control (RBAC), AES-GCM field-level encryption, and tamper-evident audit logging, aligning with major healthcare data protection standards such as HIPAA (US), PIPEDA (Canada), and PHIPA (Ontario). Example use cases demonstrate multilingual patient-doctor interaction (English, French, Arabic) and transparent diagnostic reasoning powered by large language models. As an applied AI contribution, this work highlights the feasibility of combining agentic orchestration, multilingual accessibility, and compliance-aware architecture in a single stack for healthcare applications. The platform is presented as a research prototype and is not a certified medical device.
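Two of the mechanisms named in this abstract, RBAC and tamper-evident audit logging, can be sketched roughly as follows. The role names and permission table are invented for illustration, and the prototype's AES-GCM field encryption is omitted; this is a stdlib-only sketch of the general techniques, not the platform's actual code:

```python
import hashlib
import json

# Hypothetical role -> permission table (illustrative only).
PERMISSIONS = {
    "doctor":  {"read_record", "write_record"},
    "nurse":   {"read_record"},
    "billing": {"read_invoice"},
}

def check_access(role, action):
    """RBAC check: is the action permitted for this role?"""
    return action in PERMISSIONS.get(role, set())

class AuditLog:
    """Tamper-evident log: each entry's hash covers the previous entry's
    hash, so altering any past entry breaks the whole chain."""
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def append(self, event):
        record = json.dumps({"event": event, "prev": self._last_hash},
                            sort_keys=True)
        self._last_hash = hashlib.sha256(record.encode()).hexdigest()
        self.entries.append({"event": event, "hash": self._last_hash})

log = AuditLog()
allowed = check_access("nurse", "read_record")    # permitted
denied = check_access("nurse", "write_record")    # not permitted
log.append({"role": "nurse", "action": "read_record", "allowed": allowed})
print(allowed, denied, len(log.entries))
```

Hash-chaining gives tamper evidence, not tamper prevention: an auditor replaying the chain can detect any retroactive edit, which is the property compliance regimes such as HIPAA audit controls are after.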
The Strategic Imperative for Healthcare Organizations to Build Proprietary Foundation Models
This paper presents a comprehensive analysis of the strategic imperative for healthcare organizations to develop proprietary foundation models rather than relying exclusively on commercial alternatives. We examine four fundamental considerations driving this imperative: the domain-specific requirements of healthcare data representation, critical data sovereignty and governance considerations unique to healthcare, strategic competitive advantages afforded by proprietary AI infrastructure, and the transformative potential of healthcare-specific foundation models for patient care and organizational operations. Through analysis of empirical evidence, economic frameworks, and organizational case studies, we demonstrate that proprietary multimodal foundation models enable healthcare organizations to achieve superior clinical performance, maintain robust data governance, create sustainable competitive advantages, and accelerate innovation pathways. While acknowledging implementation challenges, we present evidence showing organizations with proprietary AI capabilities demonstrate measurably improved outcomes, faster innovation cycles, and stronger strategic positioning in the evolving healthcare ecosystem. This analysis provides healthcare leaders with a comprehensive framework for evaluating build-versus-buy decisions regarding foundation model implementation, positioning proprietary foundation model development as a cornerstone capability for forward-thinking healthcare organizations.
Justice in Healthcare Artificial Intelligence in Africa
Ochasi, Aloysius, Mahamadou, Abdoul Jalil Djiberou, Altman, Russ B.
There is an ongoing debate on balancing the benefits and risks of artificial intelligence (AI) as AI is becoming critical to improving healthcare delivery and patient outcomes. Such improvements are essential in resource-constrained settings where millions lack access to adequate healthcare services, such as in Africa. AI in such a context can potentially improve the effectiveness, efficiency, and accessibility of healthcare services. Nevertheless, the development and use of AI-driven healthcare systems raise numerous ethical, legal, and socio-economic issues. Justice is a major concern in AI that has implications for amplifying social inequities. This paper discusses these implications and related justice concepts such as solidarity, Common Good, sustainability, AI bias, and fairness. For Africa to effectively benefit from AI, these principles should align with the local context while balancing the risks. Compared to mainstream ethical debates on justice, this perspective offers context-specific considerations for equitable healthcare AI development in Africa.
The secret to healthcare AI is ... human beings
Nobody goes to the circus to see the net. But when the high-wire gymnast slips, the net is suddenly the star of the show. So, don't think of me as being in the health insurance industry, the safety net of your life. I don't want you to stop reading. I'll tell you I'm an expert in customer service and have been persistently challenged by innovators in the retail and tech space who have conditioned the consumer to expect instant responses, instant results and instant products.
Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI
Ahmad, Muhammad Aurangzeb, Yaramis, Ilker, Roy, Taposh Dutta
Large language models have proliferated across multiple domains in a short period of time. There is, however, hesitation in the medical and healthcare domain towards their adoption because of issues like factuality, coherence, and hallucinations. Given the high-stakes nature of healthcare, many researchers have even cautioned against their usage until these issues are resolved. The key to the implementation and deployment of LLMs in healthcare is to make these models trustworthy, transparent (as much as possible), and explainable. In this paper we describe the key elements in creating reliable, trustworthy, and unbiased models as a necessary condition for their adoption in healthcare. Specifically, we focus on the quantification, validation, and mitigation of hallucinations in the context of healthcare. Lastly, we discuss what the future of LLMs in healthcare may look like.
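As one illustration of what "quantifying hallucinations" can mean in practice, a crude grounding score measures how much of a model's answer is supported by its source document. This toy token-overlap metric is an illustrative assumption, not a method from the paper; production systems typically use entailment models or citation verification instead:

```python
import re

def grounding_score(answer, source):
    """Fraction of answer tokens that also appear in the source text.
    A low score flags content the model may have hallucinated.
    (Toy metric; ignores word order, synonyms, and negation.)"""
    def tokenize(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 1.0
    return len(answer_tokens & tokenize(source)) / len(answer_tokens)

source = "Metformin is a first-line treatment for type 2 diabetes."
grounded = grounding_score("Metformin treats type 2 diabetes", source)
invented = grounding_score("Metformin cures hepatitis in all patients", source)
print(round(grounded, 2), round(invented, 2))  # 0.8 0.17
```

Even a metric this naive separates the supported claim from the invented one, which is the basic shape of the quantification step the abstract refers to; validation and mitigation then build on such scores.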
AI and the Training of Healthcare Workers - Digital Salutem
With the rise of AI in healthcare, there's been a lot of talk about how it will affect the job market. But what does that mean for people who work in healthcare? The truth is, we don't know for sure yet. But what's clear is that artificial intelligence will change how we approach training healthcare workers. With AI, we can provide more targeted training methods than ever before.
Healthcare AI is advancing rapidly, so why aren't Americans noticing the progress?
There's no doubt that artificial intelligence (AI) in healthcare had a very successful year. Back in October, the FDA added 178 AI-enabled devices to its list of 500 AI technologies that are approved for medical use. Topping the list for most approved devices were two massive players in the healthcare technology space: GE Healthcare, with 42 authorized AI devices, and Siemens, with 29. Together, the two companies accounted for nearly 40% of the new devices that made the list.
They already are, but may not realize it since many tools are used by clinicians behind the scenes in radiology and imaging, explained Peter Shen, head of digital health at Siemens Healthineers North America. But increasing personalized medical care by using AI tools is something Siemens is continuing to refine and prioritize. "Our strategy for AI goes beyond imaging and pattern recognition," Shen said. "The informed diagnostics we derive from AI allow us to design better ways to take care of patients. For us, it is about more than efficiency and more than just decision-making. We want to start to drive personalized medicine toward the patients themselves and create accessibility in medical care."
Developing trust in healthcare AI, step by step
A new Chilmark Research report by Dr. Jody Ranck, the firm's senior analyst, explores state-of-the-art processes for bias and risk mitigation in artificial intelligence that can be used to develop more trustworthy machine learning tools for healthcare. WHY IT MATTERS As the usage of artificial intelligence in healthcare grows, some providers are skeptical about how much they should trust machine learning models deployed in clinical settings. AI products and services have the potential to determine who gets what form of medical care and when – so stakes are high when algorithms are deployed, as Chilmark's 2022 "AI and Trust in Healthcare Report," published September 13, explains. Growth in enterprise-level augmented and artificial intelligence has touched population health research, clinical practice, emergency room management, health system operations, revenue cycle management, supply chains and more. Efficiencies and cost-savings that AI can help organizations realize are driving that array of use cases, along with deeper insights into clinical patterns that machine learning can surface.